8 research outputs found

    Secrets Revealed in Container Images: An Internet-wide Study on Occurrence and Impact

    Containerization allows bundling applications and their dependencies into a single image. The containerization framework Docker eases the use of this concept and enables sharing images publicly, a practice that has gained high momentum. However, it can lead to users creating and sharing images that include private keys or API secrets, either by mistake or out of negligence. This leakage impairs the creator's security and that of everyone using the image. Yet, the extent of this practice and how to counteract it remain unclear. In this paper, we analyze 337,171 images from Docker Hub and 8,076 other private registries, unveiling that 8.5% of images indeed include secrets. Specifically, we find 52,107 private keys and 3,158 leaked API secrets, both opening a large attack surface, i.e., putting authentication and the confidentiality of privacy-sensitive data at stake and even allowing active attacks. We further document that these leaked keys are used in the wild: we discovered 1,060 certificates issued by public certificate authorities that rely on compromised keys, and, based on further active Internet measurements, we find 275,269 TLS and SSH hosts using leaked private keys for authentication. To counteract this issue, we discuss how our methodology can be used to prevent secret leakage and reuse.
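    The detection idea behind such a study, matching image contents against signatures for private keys and API secrets, can be sketched as follows. This is a minimal illustration and not the authors' tooling: the pattern set (the PEM private-key header and an AWS-style "AKIA" access-key prefix) is an assumed example, far smaller than a real rule set.

```python
import re

# Two illustrative signature classes (assumptions, not the paper's rule set):
# private keys carry a well-known PEM header, while many API secrets follow
# vendor-specific formats such as the AWS access-key "AKIA" prefix.
SECRET_PATTERNS = {
    "private_key": re.compile(
        r"-----BEGIN (?:RSA |EC |DSA |OPENSSH )?PRIVATE KEY-----"
    ),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_text(text: str) -> list[str]:
    """Return the names of all secret patterns that match the given text."""
    return sorted(name for name, pat in SECRET_PATTERNS.items() if pat.search(text))
```

    In practice, such patterns would be run over every file in every layer of an image, since secrets deleted in a later layer still survive in earlier ones.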

    Assessing the Security of OPC UA Deployments

    To address the increasing security demands of industrial deployments, OPC UA is one of the first industrial protocols explicitly designed with security in mind. However, deploying it securely requires a thorough configuration of a wide range of options. Thus, assessing the security of OPC UA deployments and their configuration is necessary to ensure secure operation, most importantly the confidentiality and integrity of industrial processes. In this work, we present extensions to the popular Metasploit Framework to ease network-based security assessments of OPC UA deployments. To this end, we discuss methods to discover OPC UA servers, test their authentication, obtain their configuration, and check for vulnerabilities. Ultimately, our work enables operators to verify the (security) configuration of their systems and identify potential attack vectors.
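    Discovering OPC UA servers starts with the protocol's binary handshake: a client opens a TCP connection (port 4840 by default) and sends a Hello message. The sketch below builds such a message following the binary layout in OPC UA Part 6; it is an illustration of the probe idea, not the Metasploit modules described above, and the buffer-size values are arbitrary assumptions.

```python
import struct

def build_opcua_hello(endpoint_url: str) -> bytes:
    """Build an OPC UA TCP Hello ("HEL") message for server discovery.

    Field order follows the OPC UA binary transport (Part 6); the buffer
    sizes below are illustrative values, not required constants.
    """
    url = endpoint_url.encode("utf-8")
    body = struct.pack(
        "<IIIII",
        0,       # ProtocolVersion
        65536,   # ReceiveBufferSize
        65536,   # SendBufferSize
        0,       # MaxMessageSize  (0 = no limit)
        0,       # MaxChunkCount   (0 = no limit)
    ) + struct.pack("<I", len(url)) + url  # length-prefixed EndpointUrl
    # Header: message type "HEL", chunk type "F" (final), total size (LE uint32)
    header = b"HEL" + b"F" + struct.pack("<I", 8 + len(body))
    return header + body
```

    A scanner would send this message and parse the server's Acknowledge response before requesting the endpoint list, which reveals the offered security policies.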

    Dataset to "Easing the Conscience with OPC UA: An Internet-Wide Study on Insecure Deployments"

    This is the dataset to "Easing the Conscience with OPC UA: An Internet-Wide Study on Insecure Deployments" [In ACM Internet Measurement Conference (IMC ’20)]. It contains our weekly scanning results between 2020-02-09 and 2020-08-31, compiled using our zgrab2 extensions, i.e., it contains an Internet-wide view on OPC UA deployments and their security configurations. To compile the dataset, we anonymized the output of zgrab2, i.e., we removed host and network identifiers from the dataset. More precisely, we mapped all IP addresses, fully qualified hostnames, and autonomous system IDs to numbers, and removed certificates containing any identifiers. See the README file for more information. Using this dataset, we showed that 93% of Internet-facing OPC UA deployments have problematic security configurations, e.g., missing access control (on 24% of hosts), disabled security functionality (24%), or use of deprecated cryptographic primitives (25%). Furthermore, we discovered several hundred devices in multiple autonomous systems sharing the same security certificate, opening the door for impersonation attacks. Overall, our analysis of this dataset underpins that secure protocols are, in general, no guarantee for secure deployments: they must be configured correctly, following regularly updated guidelines that account for basic primitives losing their security promises.
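    The described anonymization, mapping each identifier to a number while keeping the mapping consistent across the dataset, can be sketched as a simple pseudonymization table. This is an assumed illustration of the approach, not the authors' actual processing pipeline.

```python
class Pseudonymizer:
    """Map identifiers (IPs, hostnames, AS IDs) to stable sequential numbers.

    A sketch of the anonymization idea: the same identifier always maps to
    the same number, so cross-host analyses (e.g. spotting a certificate
    shared by many hosts) remain possible without revealing the hosts.
    """

    def __init__(self) -> None:
        self._mapping: dict[str, int] = {}

    def anonymize(self, identifier: str) -> int:
        if identifier not in self._mapping:
            self._mapping[identifier] = len(self._mapping)
        return self._mapping[identifier]
```

    Separate tables would typically be kept per identifier type, so that IP number 3 and AS number 3 cannot be confused.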

    Privacy-Preserving Remote Knowledge System


    A False Sense of Security? Revisiting the State of Machine Learning-Based Industrial Intrusion Detection

    Anomaly-based intrusion detection promises to detect novel or unknown attacks on industrial control systems by modeling expected system behavior and raising corresponding alarms for any deviations. As manually creating these behavioral models is tedious and error-prone, research focuses on machine learning to train them automatically, achieving detection rates upwards of 99%. However, these approaches are typically trained not only on benign traffic but also on attacks, and then evaluated against the same type of attack used for training. Hence, their actual, real-world performance on unknown (not trained on) attacks remains unclear. In turn, the reported near-perfect detection rates of machine learning-based intrusion detection might create a false sense of security. To assess this situation and clarify the real potential of machine learning-based industrial intrusion detection, we develop an evaluation methodology and examine multiple approaches from the literature for their performance on unknown attacks (excluded from training). Our results highlight an ineffectiveness in detecting unknown attacks, with detection rates dropping to between 3.2% and 14.7% for some types of attacks. Moving forward, we derive recommendations for further research on machine learning-based approaches to ensure clarity on their ability to detect unknown attacks.
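    The core of such an evaluation methodology is to hold each attack type out of training and test only on it, so that reported detection rates reflect truly unknown attacks. A minimal sketch of this split logic (assuming labeled samples with a "benign" class; not the paper's actual code) might look like:

```python
def leave_one_attack_out(samples):
    """Generate (held_out, train, test) splits for unknown-attack evaluation.

    `samples` is a list of (features, label) pairs, where the label is
    "benign" or an attack-type name. Each attack type is excluded from
    training in turn and used exclusively for testing; benign traffic
    always stays in the training set.
    """
    attack_types = sorted({label for _, label in samples if label != "benign"})
    for held_out in attack_types:
        train = [s for s in samples if s[1] != held_out]
        test = [s for s in samples if s[1] == held_out]
        yield held_out, train, test
```

    Averaging detection rates over these splits measures generalization to attack types the model has never seen, rather than memorization of known ones.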